    Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Graph Convolutional Network

    Feature descriptors of point clouds are used in several applications, such as registration and part segmentation of 3D point clouds. Learning discriminative representations of local geometric features is unquestionably the most important task for accurate point cloud analysis. However, it is challenging to develop rotation- or scale-invariant descriptors. Most previous studies have either ignored rotations or empirically tuned scale parameters, which hinders the applicability of these methods to real-world datasets. In this paper, we present a new local feature description method that is robust to rotation, density, and scale variations. Moreover, to improve the representations of the local descriptors, we propose a global aggregation method. First, we place kernels around each point, aligned in the normal direction. To avoid the sign ambiguity of the normal vector, we use a symmetric kernel point distribution in the tangential plane. From each kernel point, we project the points from the spatial domain to a feature space that is robust to multiple scales and rotation, based on angles and distances. Subsequently, we perform graph convolutions that consider both local kernel point structures and the long-range global context obtained by the global aggregation method. We evaluated the proposed descriptors on benchmark datasets (i.e., ModelNet40 and ShapeNetPart) for registration, classification, and part segmentation of 3D point clouds. Our method outperformed state-of-the-art methods, reducing rotation and translation errors by 70% in the registration task, and achieved comparable performance in the classification and part-segmentation tasks with simple, low-dimensional architectures. Comment: 10 pages, 9 figures
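
    The rotation and scale robustness claimed above rests on describing each neighborhood with distances and angles rather than raw coordinates. Below is a minimal NumPy sketch of that general idea only; the function name, the k-nearest-neighbor choice, and the normalization are illustrative assumptions and do not reproduce the paper's kernel-point projection or graph convolutions.

        import numpy as np

        def local_rotation_invariant_features(points, center, normal, k=16):
            # Hypothetical helper: encode a neighborhood with quantities that are
            # unchanged by rigid rotations (distances and angles to the normal).
            diffs = points - center                   # vectors from the center point
            dists = np.linalg.norm(diffs, axis=1)     # rotation-invariant distances
            nearest = np.argsort(dists)[1:k + 1]      # k nearest neighbors, skipping the point itself
            d = dists[nearest]
            # |cos| of the angle to the normal sidesteps the normal-sign ambiguity
            # mentioned in the abstract.
            cos_angle = np.abs(diffs[nearest] @ normal) / (d * np.linalg.norm(normal) + 1e-9)
            # Normalizing distances by the neighborhood radius adds scale robustness.
            return np.stack([d / (d.max() + 1e-9), cos_angle], axis=1)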

    Tooth Instance Segmentation from Cone-Beam CT Images through Point-based Detection and Gaussian Disentanglement

    Individual tooth segmentation and identification from cone-beam computed tomography images are preoperative prerequisites for orthodontic treatments. Instance segmentation methods using convolutional neural networks have demonstrated ground-breaking results on individual tooth segmentation tasks, and are used in various medical imaging applications. While point-based detection networks achieve superior results on dental images, it is still a challenging task to distinguish adjacent teeth because of their similar topologies and proximate nature. In this study, we propose a point-based tooth localization network that effectively disentangles each individual tooth based on a Gaussian disentanglement objective function. The proposed network first performs heatmap regression accompanied by box regression for all the anatomical teeth. A novel Gaussian disentanglement penalty is employed by minimizing the sum of the pixel-wise multiplication of the heatmaps for all adjacent teeth pairs. Subsequently, individual tooth segmentation is performed by converting a pixel-wise labeling task to a distance map regression task to minimize false positives in adjacent regions of the teeth. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art approaches by increasing the average precision of detection by 9.1%, which results in a high performance in terms of individual tooth segmentation. The primary significance of the proposed method is two-fold: 1) the introduction of a point-based tooth detection framework that does not require additional classification and 2) the design of a novel loss function that effectively separates Gaussian distributions based on heatmap responses in the point-based detection framework. Comment: 11 pages, 7 figures
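
    The Gaussian disentanglement penalty described above has a simple form: for every pair of adjacent teeth, the pixel-wise product of their two heatmaps is summed, so the loss is large only where the two Gaussian responses overlap. A minimal PyTorch-style sketch, assuming the adjacency pairs are supplied externally, is:

        import torch

        def gaussian_disentanglement_penalty(heatmaps, adjacent_pairs):
            # heatmaps: tensor of shape (T, H, W), one predicted heatmap per tooth.
            # adjacent_pairs: list of (i, j) indices of anatomically adjacent teeth
            # (assumed to be provided; their construction is not shown here).
            loss = heatmaps.new_zeros(())
            for i, j in adjacent_pairs:
                # The element-wise product is non-zero only where both heatmaps
                # respond, so summing it penalizes overlapping Gaussian peaks.
                loss = loss + (heatmaps[i] * heatmaps[j]).sum()
            return loss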

    Volumetric Lung Nodule Segmentation using Adaptive ROI with Multi-View Residual Learning

    Accurate quantification of pulmonary nodules can greatly assist the early diagnosis of lung cancer, which can enhance patient survival chances. A number of nodule segmentation techniques have been proposed; however, all of the existing techniques either rely on a radiologist-provided 3-D volume of interest (VOI) or use a constant region of interest (ROI), and only investigate the presence of nodule voxels within the given VOI. Such approaches prevent the solution from investigating nodule presence outside the given VOI and may also include redundant structures in the VOI, which can lead to inaccurate nodule segmentation. In this work, a novel semi-automated approach for 3-D segmentation of nodules in volumetric computed tomography (CT) lung scans is proposed. The proposed technique comprises two stages. In the first stage, it takes a 2-D ROI containing the nodule as input and performs a patch-wise investigation along the axial axis with a novel adaptive ROI strategy. The adaptive ROI algorithm enables the solution to dynamically select the ROI for the surrounding slices to investigate the presence of the nodule using a deep residual U-Net architecture. The first stage provides an initial estimate of the nodule, which is then used to extract the VOI. In the second stage, the extracted VOI is further investigated along the coronal and sagittal axes with two different networks, and finally all the estimated masks are fed into a consensus module to produce the final volumetric segmentation of the nodule. The proposed approach has been rigorously evaluated on the LIDC dataset, which is the largest publicly available dataset. The results suggest that the approach is significantly more robust and accurate than previous state-of-the-art techniques. Comment: The manuscript is currently under review and copyright shall be transferred to the publisher upon acceptance
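
    Two of the ideas above lend themselves to short illustrations: deriving the adaptive ROI for a neighboring slice from the mask predicted on the current slice, and fusing the per-view masks in the consensus module. The NumPy sketch below uses assumed parameters (the margin and the majority-vote rule are illustrative, not the paper's exact design):

        import numpy as np

        def adaptive_roi(prev_mask, margin=8):
            # Derive the ROI for the neighboring axial slice from the bounding box
            # of the nodule mask predicted on the current slice, padded by a margin.
            ys, xs = np.nonzero(prev_mask)
            if ys.size == 0:
                return None  # no nodule voxels found; stop growing along this axis
            return (max(ys.min() - margin, 0), ys.max() + margin,
                    max(xs.min() - margin, 0), xs.max() + margin)

        def consensus(axial, coronal, sagittal, min_votes=2):
            # Majority vote across the three per-view volumetric masks.
            votes = axial.astype(np.uint8) + coronal.astype(np.uint8) + sagittal.astype(np.uint8)
            return votes >= min_votes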

    Parallelized Seeded Region Growing Using CUDA

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader-language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, indicating that it can substantially assist segmentation during large-scale CT screening tests.
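
    The reason SRG parallelizes well is that each growth step only needs to test the current frontier of candidate voxels, and every candidate can be tested independently. The NumPy sketch below shows this level-synchronous formulation, the kind of per-voxel sweep a CUDA kernel can assign one thread to; it only illustrates the scheme, not the paper's CUDA implementation, and the fixed intensity threshold is an assumed homogeneity criterion.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def frontier_srg(volume, seed, threshold):
            # Level-synchronous seeded region growing: each iteration evaluates the
            # whole frontier at once, so the per-iteration work is data-parallel.
            region = np.zeros(volume.shape, dtype=bool)
            region[seed] = True                      # seed is a (z, y, x) index tuple
            seed_value = volume[seed]
            while True:
                # Candidate voxels: neighbors of the region not yet included.
                frontier = binary_dilation(region) & ~region
                accepted = frontier & (np.abs(volume - seed_value) <= threshold)
                if not accepted.any():
                    return region
                region |= accepted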

    MEDS-Net: Self-Distilled Multi-Encoders Network with Bi-Direction Maximum Intensity projections for Lung Nodule Detection

    In this study, we propose a lung nodule detection scheme that fully incorporates the clinical workflow of radiologists. In particular, we exploit bi-directional maximum intensity projection (MIP) images of various thicknesses (i.e., 3, 5 and 10 mm) along with a 3D patch of the CT scan, consisting of 10 adjacent slices, which are fed into the self-distillation-based Multi-Encoders Network (MEDS-Net). The proposed architecture first condenses the 3D patch input to three channels using a dense block composed of dense units that effectively examine nodule presence in the 2D axial slices. This condensed information, along with the forward and backward MIP images, is fed to three different encoders to learn the most meaningful representation, which is forwarded to the decoder block at various levels. At the decoder block, we employ a self-distillation mechanism by connecting a distillation block that contains five lung nodule detectors. This helps expedite convergence and improves the learning ability of the proposed architecture. Finally, the proposed scheme reduces false positives by complementing the main detector with the auxiliary detectors. The proposed scheme has been rigorously evaluated on 888 scans of the LUNA16 dataset and obtained a CPM score of 93.6%. The results demonstrate that incorporating bi-directional MIP images enables MEDS-Net to effectively distinguish nodules from their surroundings, helping to achieve sensitivities of 91.5% and 92.8% at false-positive rates of 0.25 and 0.5 per scan, respectively.
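
    The bi-directional MIP inputs mentioned above can be computed directly from the CT volume: for a reference axial slice, the maximum intensity is taken over a slab of the chosen thickness once in the forward and once in the backward direction. A small NumPy sketch, with an assumed 1 mm slice spacing and illustrative parameter names, is:

        import numpy as np

        def bidirectional_mips(ct_volume, center_idx, thickness_mm, spacing_mm=1.0):
            # ct_volume: array of shape (slices, H, W); center_idx: reference axial slice.
            n = max(int(round(thickness_mm / spacing_mm)), 1)   # slab size in slices
            fwd = ct_volume[center_idx:center_idx + n].max(axis=0)                  # forward MIP
            bwd = ct_volume[max(center_idx - n + 1, 0):center_idx + 1].max(axis=0)  # backward MIP
            return fwd, bwd

        # e.g. forward/backward MIPs for the 3, 5 and 10 mm thicknesses mentioned above:
        # mips = [bidirectional_mips(volume, i, t) for t in (3, 5, 10)]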